
    Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture

    While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering, and of neural simulation in general, this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex, which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally, we analyze the performance of our new approach using both benchmarks designed to represent cortical connectivity and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has previously been possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally, this network contains two types of plastic synapse which previously had to be trained separately but which, using our new approach, can be trained simultaneously.
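
    Since each core must finish its neuron and synapse processing inside one fixed time step, the per-step cost of handling thousands of sparse synaptic events per neuron is what a synapse-centric mapping rebalances across cores. The sketch below, a plain-Python illustration and not SpiNNaker code, shows a time-step-driven, current-based LIF update in which the event-handling loop is the part whose cost grows with connectivity; all names and parameter values are illustrative assumptions.

```python
# Minimal sketch (not SpiNNaker code): a time-step-driven, current-based LIF
# update loop, illustrating why per-step work is dominated by synaptic events
# when each neuron receives thousands of sparse inputs. Parameter names and
# values are illustrative assumptions, not those of the SpiNNaker toolchain.
import numpy as np

DT = 1.0          # time step (ms), as in biological real-time simulation
TAU_M = 20.0      # membrane time constant (ms)
V_THRESH = -50.0  # spike threshold (mV)
V_RESET = -65.0   # reset/resting potential (mV), merged for simplicity

def step(v, input_current, spike_events, weights):
    """Advance all neurons on one core by a single fixed time step.

    spike_events: list of (pre_index, post_index) synaptic events that arrived
    during this step; their number is the quantity a synapse-centric mapping
    balances across cores.
    """
    # accumulate synaptic input (the expensive, event-dependent part)
    for pre, post in spike_events:
        input_current[post] += weights[pre, post]
    # leaky integration (the cheap, neuron-dependent part)
    v += DT / TAU_M * (V_RESET - v + input_current)
    fired = v >= V_THRESH
    v[fired] = V_RESET
    input_current[:] = 0.0
    return v, fired
```

    At cortical connectivity levels such as 8000 incoming synapses per neuron, the event loop above dominates the step budget, which is why distributing synapse processing, rather than neurons, across cores pays off.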

    Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach

    Speech recognition has become an important task for improving the human-machine interface. Taking into account the limitations of current automatic speech recognition systems, such as non-real-time, cloud-based solutions or their power demands, recent interest in neural networks and bio-inspired systems has motivated the implementation of new techniques. Among them, the combination of spiking neural networks and neuromorphic auditory sensors offers an alternative way to carry out the human-like speech processing task. In this approach, a spiking convolutional neural network model was implemented, in which the connection weights were calculated by training a convolutional neural network with specific activation functions, using firing-rate-based static images built from the spiking information obtained from a neuromorphic cochlea. The system was trained and tested with a large dataset containing "left" and "right" speech commands, achieving 89.90% accuracy. A novel spiking neural network model has been proposed to adapt the network trained with static images to a non-static processing approach, making it possible to classify audio signals and time series in real time.
    Ministerio de Economía y Competitividad TEC2016-77785-
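
    A minimal sketch of the kind of preprocessing the abstract describes: binning cochlea spike events into a firing-rate "image" (frequency channels × time bins) that a conventional CNN can be trained on before its weights are reused in a spiking network. The function name, arguments, and normalisation are assumptions for illustration, not the authors' code.

```python
# Assumed preprocessing sketch: turn a list of spike events from a
# neuromorphic cochlea into a firing-rate image whose rows are frequency
# channels and whose columns are time bins.
import numpy as np

def spikes_to_rate_image(events, n_channels, duration_ms, n_time_bins):
    """events: iterable of (timestamp_ms, channel) spike events."""
    image = np.zeros((n_channels, n_time_bins))
    bin_width = duration_ms / n_time_bins
    for t, ch in events:
        b = min(int(t // bin_width), n_time_bins - 1)
        image[ch, b] += 1.0
    # convert spike counts per bin to firing rates (spikes per second)
    image /= (bin_width / 1000.0)
    # normalise to [0, 1] so the image can be fed to a standard CNN
    return image / image.max() if image.max() > 0 else image
```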

    Live Demonstration: Multiplexing AER Asynchronous Channels over LVDS Links with Flow-Control and Clock-Correction for Scalable Neuromorphic Systems

    In this live demonstration we exploit a serial link for fast asynchronous communication in massively parallel processing platforms connected to a DVS, enabling real-time implementation of bio-inspired vision processing on spiking neural networks.

    A Real-Time, Event-Driven Neuromorphic System for Goal-Directed Attentional Selection

    Computation with spiking neurons takes advantage of the abstraction of action potentials into streams of stereotypical events, which encode information through their timing. This approach both reduces power consumption and alleviates communication bottlenecks. A number of such spiking custom mixed-signal address-event representation (AER) chips have been developed in recent years. In this paper, we present (i) a flexible event-driven platform consisting of the integration of a visual AER sensor and the SpiNNaker system, a programmable, massively parallel digital architecture oriented to the simulation of spiking neural networks; and (ii) the implementation of a neural network for feature-based attentional selection on this platform.
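
    The address-event representation mentioned above transmits only pixel events, each identified by its address and a timestamp, instead of full frames. The sketch below illustrates this idea in Python; the 32-bit field layout is an assumed example, not the actual packet format of the sensor or of SpiNNaker.

```python
# Illustrative sketch of the AER idea: each retina pixel emits an "event"
# identified by its address and a timestamp, so only activity (not full
# frames) crosses the link. The bit-field widths below are assumptions for
# illustration only.
from dataclasses import dataclass

@dataclass
class AEREvent:
    x: int          # pixel column address
    y: int          # pixel row address
    polarity: int   # +1 brightness increase, -1 decrease
    timestamp: int  # microseconds

def decode_event(word: int, timestamp: int) -> AEREvent:
    """Unpack one 32-bit address word (assumed layout: y | x | polarity)."""
    polarity = 1 if (word & 0x1) else -1
    x = (word >> 1) & 0x1FF
    y = (word >> 10) & 0x1FF
    return AEREvent(x, y, polarity, timestamp)
```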

    Performance Comparison of Time-Step-Driven versus Event-Driven Neural State Update Approaches in SpiNNaker

    The SpiNNaker chip is a multi-core processor optimized for neuromorphic applications. Many SpiNNaker chips are assembled to make a highly parallel, million-core platform. This system can be used to simulate a large number of neurons in real time. SpiNNaker uses general-purpose ARM processors, which give a high degree of flexibility in implementing different methods for processing spikes. Various libraries and packages are provided to translate a high-level description of Spiking Neural Networks (SNN) into low-level machine code that runs on the ARM processors. In this paper, we introduce and compare three different methods of implementing this intermediate layer of abstraction. We examine the advantages of each method against various criteria, which can help professional users choose between them. All the code used in this paper is available for academic purposes.
    EU H2020 grant 644096 (ECOMODE); EU H2020 grant 687299 (NEURAM3); Ministry of Economy and Competitiveness (Spain) / European Regional Development Fund TEC2015-63884-C2-1-P (COGNET)
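
    To make the comparison concrete, the sketch below contrasts the two state-update strategies for a leaky integrate-and-fire neuron in plain Python (not SpiNNaker ARM code): a time-step-driven loop that updates state every tick, and an event-driven update that advances the analytic solution only when a spike arrives. Thresholding is omitted and all parameter values are illustrative assumptions.

```python
# Sketch of time-step-driven vs event-driven neural state updates for a
# threshold-free LIF membrane; values and names are illustrative assumptions.
import math

TAU_M, V_REST, DT = 20.0, -65.0, 1.0  # ms, mV, ms

def time_step_driven(v, spikes_per_step, weight, n_steps):
    """Update the membrane potential on every time step, spikes or not."""
    for k in range(n_steps):
        # forward-Euler leak plus the input from spikes arriving this step
        v += DT / TAU_M * (V_REST - v) + weight * spikes_per_step[k]
    return v

def event_driven(v, spike_times, weight, t_end):
    """Advance the analytic solution only when an incoming spike arrives."""
    t = 0.0
    for ts in spike_times:
        v = V_REST + (v - V_REST) * math.exp(-(ts - t) / TAU_M)  # decay up to the spike
        v += weight                                              # apply the synaptic jump
        t = ts
    # final decay from the last spike to the end of the simulation window
    return V_REST + (v - V_REST) * math.exp(-(t_end - t) / TAU_M)
```

    The trade-off the paper examines follows directly: the time-step-driven form has a fixed cost per tick regardless of activity, whereas the event-driven form's cost scales with the number of incoming spikes.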

    Towards Real-World Neurorobotics: Integrated Neuromorphic Visual Attention

    Neural Information Processing: 21st International Conference, ICONIP 2014, Kuching, Malaysia, November 3-6, 2014. Proceedings, Part III.
    Neuromorphic hardware and cognitive robots seem like an obvious fit, yet progress to date has been frustrated by the lack of tangible results in achieving useful real-world behaviour. System limitations, notably the simple and usually proprietary nature of neuromorphic and robotic platforms, have often been the fundamental barrier. Here we present an integration of a mature “neuromimetic” chip, SpiNNaker, with the humanoid iCub robot using a direct AER (address-event representation) interface that overcomes the need for complex proprietary protocols by sending information as UDP-encoded spikes over an Ethernet link. Using an existing neural model devised for visual object selection, we enable the robot to perform a real-world task: fixating attention upon a selected stimulus. Results demonstrate the effectiveness of the interface and model in controlling the robot towards stimulus-specific object selection. Using SpiNNaker as an embeddable neuromorphic device illustrates the importance of two design features in a prospective neurorobot: universal configurability, which allows the chip to be conformed to the requirements of the robot rather than the other way around, and standard interfaces, which eliminate difficult low-level issues of connectors, cabling, signal voltages, and protocols. While this study is only a building block towards that goal, the iCub-SpiNNaker system demonstrates a path towards meaningful behaviour in robots controlled by neural network chips.
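
    The integration hinges on sending spikes as UDP datagrams over a standard Ethernet link rather than through a proprietary robot bus. The sketch below illustrates that idea only; the host address, port, and packet layout are placeholders, and SpiNNaker's actual spike-over-Ethernet packet format differs.

```python
# Illustrative sketch: neuron/pixel events encoded as small UDP datagrams.
# Host, port and payload layout are assumptions, not SpiNNaker's real format.
import socket
import struct

SPINNAKER_HOST = "192.168.1.1"   # assumed board address
SPINNAKER_PORT = 17893           # assumed UDP port

def send_spikes(neuron_ids, timestamp_ms):
    """Encode a list of spiking neuron IDs as one UDP datagram and send it."""
    payload = struct.pack("<I", timestamp_ms)
    payload += b"".join(struct.pack("<I", n) for n in neuron_ids)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (SPINNAKER_HOST, SPINNAKER_PORT))
```

    Because the interface is plain UDP over Ethernet, the same pattern works for either direction: retina events into SpiNNaker, or motor-command spikes out to the robot.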

    Benchmarking spike-based visual recognition: a dataset and evaluation

    Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organisation have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have successfully been applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and that a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks, using digits from the MNIST database. This dataset is compatible with the current state of research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
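
    As an illustration of the first encoding technique listed (rate-based Poisson spike generation), the sketch below converts a normalised MNIST digit into a list of spike events, with each pixel's intensity setting the rate of an independent Poisson spike train. The rates, duration, and time step are illustrative assumptions, not the benchmark's parameters.

```python
# Sketch of rate-based Poisson encoding: pixel intensity -> Poisson spike rate.
# Duration, maximum rate and time step are illustrative assumptions.
import numpy as np

def poisson_encode(image, duration_ms=1000.0, max_rate_hz=100.0, dt_ms=1.0, rng=None):
    """image: 2-D array of pixel intensities in [0, 1] (e.g. a 28x28 MNIST digit).

    Returns a list of (time_ms, pixel_index) spike events.
    """
    rng = rng or np.random.default_rng()
    rates = image.flatten() * max_rate_hz      # firing rate per pixel (Hz)
    p_spike = rates * dt_ms / 1000.0           # spike probability per time step
    events = []
    for step in range(int(duration_ms / dt_ms)):
        fired = rng.random(p_spike.shape) < p_spike
        t = step * dt_ms
        events.extend((t, int(i)) for i in np.flatnonzero(fired))
    return events
```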